16 research outputs found

    Model Checking at Scale: Automated Air Traffic Control Design Space Exploration

    Many possible solutions, differing in the assumptions and implementations of the components in use, are usually in competition during early design stages. Deciding which solution to adopt requires considering several trade-offs. Model checking is one way of comparing such designs; however, when the number of designs is large, building and validating so many models may be intractable. During our collaboration with NASA, we faced the challenge of considering a design space with more than 20,000 designs for the NextGen air traffic control system. To deal with this problem, we introduce a compositional, modular, parameterized approach that combines model checking with contract-based design to automatically generate large numbers of models from a set of possible components and their implementations. Our approach is fully automated, enabling the generation and validation of all target designs. The 1,620 designs that were most relevant to NASA were analyzed exhaustively. To deal with the massive amount of data generated, we apply novel data-analysis techniques that enable a rich comparison of the designs, including safety aspects. Our results were validated by NASA system designers, and helped to identify both novel and known problematic configurations.
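The core idea of generating a design space from components and their alternative implementations can be sketched as a Cartesian product filtered by contract compatibility. The following is a minimal illustration only, not the paper's tool; all component names, assumptions, and guarantees are invented for the example.

```python
from itertools import product

# Hypothetical component library: each component has alternative
# implementations, annotated with the assumptions they make and the
# guarantees they provide (all names are illustrative).
components = {
    "surveillance": [
        {"name": "radar", "guarantees": {"position"}},
        {"name": "adsb",  "guarantees": {"position", "velocity"}},
    ],
    "separation": [
        {"name": "basic",    "assumes": {"position"}},
        {"name": "advanced", "assumes": {"position", "velocity"}},
    ],
}

def compatible(design):
    """A design is valid when every assumption made by a component is
    discharged by a guarantee of some component in the same design."""
    provided = set().union(*(c.get("guarantees", set()) for c in design))
    needed = set().union(*(c.get("assumes", set()) for c in design))
    return needed <= provided

# Enumerate the full design space; keep the contract-compatible designs.
designs = [d for d in product(*components.values()) if compatible(d)]
```

Each surviving tuple would then be turned into a concrete model to be checked; the contract filter prunes incompatible combinations before any model checking runs.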

    A Formal Foundation of FDI Design via Temporal Epistemic Logic

    Autonomous systems must be able to detect and promptly react to faults. Fault Detection and Identification (FDI) components are in charge of detecting the occurrence of faults. The FDI depends on the concrete design of the system, needs to take into account how faults might interact, and can only have a partial view of the run-time state through sensors. For these reasons, the development of the FDI and the certification of its correctness and quality are difficult tasks. This difficulty is compounded by the fact that current approaches for verification of the FDI rely on manual inspection and testing. Motivated by industrial case studies from the European Space Agency, this thesis proposes a formal foundation for FDI design that covers specification, validation, verification, and synthesis. The Alarm Specification Language (ASLk) is introduced in order to formally capture a set of interesting and desirable properties of the FDI components. ASLk is equipped with a semantics based on Temporal Epistemic Logic, thus enabling reasoning about partially observable systems. Automated reasoning techniques can then be applied to perform validation, verification, and synthesis of the FDI. This formal process guarantees that the generated FDI satisfies the designer's expectations. The problems deriving from this process were out of reach for existing model-checking techniques. Therefore, we develop novel and efficient techniques for model checking temporal epistemic logic over infinite-state systems.

    A Lazy Approach to Temporal Epistemic Logic Model Checking

    Temporal Epistemic Logic is used to reason about the evolution of knowledge over time. A notable example is the temporal epistemic logic KL1, which is used to model what a reasoner can infer about the state of a dynamic system from the available observations. Applications of KL1 range from security (verification of cryptographic protocols and information flow) to diagnostic systems (fault detection and diagnosability). In this paper, we tackle the verification of KL1 properties under observational semantics by proposing an effective approach that is able to deal with both finite- and infinite-state systems. The denotation of the epistemic atoms is computed in a lazy way, driven by the counterexamples obtained from model checking an abstraction of the property. We analyze the approach on a comprehensive set of finite- and infinite-state benchmarks from the literature, evaluate the effectiveness of various optimizations, and demonstrate that our approach outperforms existing approaches.
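The observational semantics underlying the epistemic atoms can be illustrated on a toy finite-state example: a reasoner knows a property in a state exactly when the property holds in every state that produces the same observation. This is a minimal sketch of the semantics only, not the paper's lazy algorithm; the states and observations are invented.

```python
# Toy state space: each state carries an observation and a hidden fault flag.
states = [
    {"obs": "low",  "fault": False},
    {"obs": "low",  "fault": True},
    {"obs": "high", "fault": True},
]

def knows(phi, state, states):
    """Observational semantics of a knowledge atom K(phi): phi is known
    in `state` iff phi holds in every state with the same observation."""
    same_obs = [s for s in states if s["obs"] == state["obs"]]
    return all(phi(s) for s in same_obs)

faulty = lambda s: s["fault"]
# Observing "high" makes the fault certain; observing "low" does not,
# since both a faulty and a non-faulty state produce that observation.
```

The lazy approach described in the abstract avoids computing such denotations upfront, refining them only when an abstract counterexample demands it.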

    Model-based Safety Assessment of a Triple Modular Generator with XSAP

    The system design process needs to cope with the increasing complexity and size of systems, motivating the replacement of labor-intensive manual techniques with automated and semi-automated approaches. Recently, formal methods techniques, such as model-based verification and safety assessment, have been increasingly used to model systems under fault and to analyze them, generating artifacts such as fault trees and FMEA tables. In this paper, we show how to apply model-based techniques to a realistic case study from the avionics domain: a high-integrity power distribution system, the Triple Modular Generator (TMG). The TMG is composed of a redundant and reconfigurable plant and a controller that must guarantee a high level of reliability. The case study is a significant challenge from the modeling perspective, since it implements a complex reconfiguration policy, specified via a number of requirements in natural language, including a set of mutually dependent and potentially conflicting priority constraints. Moreover, from the verification standpoint, the controller must be able to handle an exponential number of possible faulty configurations. Our contribution is twofold. First, we formalize and validate the requirements and, using a constraint-based modeling style, we synthesize a correct-by-construction controller, avoiding the enumeration of all possible fault configurations, as is currently done by manual approaches. Second, we describe a comprehensive methodology and process, supported by the XSAP safety analysis platform, that targets the modeling and safety assessment of faulty systems. Using XSAP, we are able to automatically extract minimal cut sets for the TMG. We demonstrate the scalability of our approach by analyzing a parametric version of the TMG case study that contains more than 700 variables and 90 faults.
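A minimal cut set is a set of faults that leads to the top-level failure and that contains no smaller such set. The minimization step alone can be sketched as follows; this is an illustration of the concept, not XSAP's implementation, and the fault names are invented.

```python
def minimal_cut_sets(cut_sets):
    """Given fault sets that each cause the top-level event, keep only
    those with no proper subset among the others (the minimal ones)."""
    sets = [frozenset(c) for c in cut_sets]
    return sorted({c for c in sets if not any(o < c for o in sets)},
                  key=sorted)

# {"gen1", "gen2", "bus"} is discarded: its proper subset {"bus"}
# already causes the failure on its own.
mcs = minimal_cut_sets([{"gen1", "gen2"},
                        {"gen1", "gen2", "bus"},
                        {"bus"}])
```

In a model-based setting the candidate cut sets are not enumerated explicitly but extracted symbolically from the fault-extended model; the minimality criterion, however, is exactly the subset test above.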

    Towards Pareto-optimal parameter synthesis for monotonic cost functions

    Designers are often required to explore alternative solutions, trading off along different dimensions (e.g., power consumption, weight, cost, reliability, response time). Such exploration can be encoded as a problem of parameter synthesis, i.e., finding a parameter valuation (representing a design solution) such that the corresponding system satisfies a desired property. In this paper, we tackle the problem of parameter synthesis with multi-dimensional cost functions by finding solutions that lie on the Pareto front, i.e., in the space of the best possible trade-offs. We propose several algorithms, based on IC3, that interleave in various ways the search for parameter valuations that satisfy the property and the optimization with respect to costs. The most effective one relies on the reuse of inductive invariants and on the extraction of unsatisfiable cores to accelerate convergence. Our experimental evaluation shows the feasibility of the approach on practical benchmarks from diagnosability synthesis and product-line engineering, and demonstrates the importance of a tight integration between model checking and cost optimization.
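The notion of Pareto front used above can be made concrete with a small sketch: a cost valuation is on the front when no other valuation is at least as good in every dimension and strictly better in one. This illustrates only the dominance criterion, not the paper's IC3-based algorithms; the cost vectors are invented.

```python
def pareto_front(points):
    """Return the non-dominated points, minimizing every dimension."""
    def dominated(p, q):
        # q dominates p: q is no worse anywhere and differs somewhere.
        return all(b <= a for a, b in zip(p, q)) and q != p
    return [p for p in points if not any(dominated(p, q) for q in points)]

# (power, weight) costs of candidate parameter valuations;
# (3, 6) is dominated by (3, 5) and drops out.
front = pareto_front([(3, 5), (2, 6), (4, 4), (3, 6)])
```

The synthesis algorithms in the paper avoid enumerating all valuations; monotonicity of the cost functions is what lets them prune dominated regions of the parameter space symbolically.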

    Diagnosability of fair transition systems

    The integrity of complex dynamic systems often relies on the ability to detect, during operation, the occurrence of faults, or, in other words, to diagnose the system. The feasibility of this task, also known as diagnosability, depends on the nature of the system dynamics, the impact of faults, and the availability of a suitable set of sensors. Standard techniques for analyzing the diagnosability problem rely on a model of the system and on proving the absence of a faulty trace that cannot be distinguished from a non-faulty one (such a pair of traces is called a critical pair). In this paper, we tackle the problem of verifying diagnosability in the presence of fairness conditions. These extend the expressiveness of the system models, enabling the specification of assumptions on the system behavior such as the infinite occurrence of observations and/or faults. We adopt a comprehensive framework that encompasses fair transition systems, temporally extended fault models, delays between the occurrence of a fault and its detection, and rich operational contexts. We show that in the presence of fairness the definition of diagnosability has several interesting variants, and discuss their relative strengths and mutual relationships. We prove that the existence of critical pairs is not always sufficient to analyze diagnosability, and needs to be generalized to critical sets. We define new notions of critical pairs, called ribbon-shaped, with special looping conditions to represent the critical sets. Based on these findings, we provide algorithms to prove diagnosability under fairness. The approach is built on top of the classical twin plant construction, and generalizes it to cover the various forms of diagnosability and to find sufficient delays. The proposed algorithms are implemented within the xSAP platform for safety analysis, leveraging efficient symbolic model checking primitives. An experimental evaluation on a heterogeneous set of realistic benchmarks from various application domains demonstrates the effectiveness of the approach.
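The classical twin plant construction mentioned above runs two copies of the system synchronized on observations and searches for a reachable pair in which exactly one copy has executed a fault; such a pair witnesses a critical pair. The following bounded explicit-state sketch illustrates the basic idea on a toy system (without the fairness extensions, ribbon-shaped pairs, or symbolic techniques of the paper); the transition system is invented.

```python
from collections import deque

# Toy system: state -> list of (observation, next_state, is_fault_step).
# From s0, observation "a" may or may not involve a fault, and both
# branches then emit "b" forever, so the fault is undetectable.
trans = {
    "s0": [("a", "s1", False), ("a", "f1", True)],
    "s1": [("b", "s1", False)],
    "f1": [("b", "f1", False)],
}

def has_critical_pair(trans, init, depth):
    """Twin-plant search up to a bounded depth: explore pairs of runs
    synchronized on observations; report a pair where exactly one run
    has executed a fault (an indistinguishable faulty/non-faulty pair)."""
    frontier = deque([(init, init, False, False, 0)])
    while frontier:
        p, q, fp, fq, d = frontier.popleft()
        if fp != fq:
            return True
        if d == depth:
            continue
        for (o1, p2, e1) in trans[p]:
            for (o2, q2, e2) in trans[q]:
                if o1 == o2:  # synchronize on equal observations
                    frontier.append((p2, q2, fp or e1, fq or e2, d + 1))
    return False
```

Finding such a pair means the toy system is not diagnosable; the paper's contribution is precisely that under fairness this pair-based check must be generalized to critical sets.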